
Binary Patterns Encoded Convolutional Neural Networks for Texture Recognition and Remote Sensing Scene Classification



Abstract

Designing discriminative and powerful texture features robust to realistic imaging conditions is a challenging computer vision problem with many applications, including material recognition and analysis of satellite or aerial imagery. In the past, most texture description approaches were based on dense orderless statistical distributions of local features. However, most recent approaches to texture recognition and remote sensing scene classification are based on Convolutional Neural Networks (CNNs). The de facto practice when learning these CNN models is to use RGB patches as input, with training performed on large amounts of labeled data (ImageNet). In this paper, we show that Binary Patterns encoded CNN models, codenamed TEX-Nets, trained using mapped coded images with explicit texture information provide complementary information to the standard RGB deep models. Additionally, two deep architectures, namely early and late fusion, are investigated to combine the texture and color information. To the best of our knowledge, we are the first to investigate Binary Patterns encoded CNNs and different deep network fusion architectures for texture recognition and remote sensing scene classification. We perform comprehensive experiments on four texture recognition datasets and four remote sensing scene classification benchmarks: UC-Merced with 21 scene categories, WHU-RS19 with 19 scene classes, RSSCN7 with 7 categories, and the recently introduced large-scale aerial image dataset (AID) with 30 aerial scene types. We demonstrate that TEX-Nets provide complementary information to the standard RGB deep model of the same network architecture. Our late fusion TEX-Net architecture always improves the overall performance compared to the standard RGB network on both recognition problems. Our final combination outperforms the state-of-the-art without employing fine-tuning or an ensemble of RGB network architectures.
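The key input transformation behind TEX-Nets is feeding the network Local Binary Pattern (LBP) coded images, which carry explicit texture information, instead of (or alongside) raw RGB patches. As a rough illustration only, the basic 8-neighbour LBP code can be sketched as below; the paper works with mapped LBP variants, and the function name and plain-Python layout here are our own, not the authors' implementation.

```python
def lbp_codes(gray):
    """Basic 8-neighbour Local Binary Pattern: each interior pixel of a
    2-D grayscale image (list of lists, values 0-255) receives an 8-bit
    code, with one bit set per neighbour whose value is >= the centre."""
    h, w = len(gray), len(gray[0])
    # Clockwise neighbour offsets, starting at the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    out = []
    for y in range(1, h - 1):
        row = []
        for x in range(1, w - 1):
            centre = gray[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if gray[y + dy][x + dx] >= centre:
                    code |= 1 << bit
            row.append(code)
        out.append(row)
    return out
```

The resulting code map is itself image-shaped, which is what makes it usable as a CNN input channel: a mapped version of these codes can be stacked into a coded image and passed through a standard architecture, with early fusion merging texture and RGB channels at the input and late fusion merging the two networks' deeper features.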
